Parametric RDT approach to computational gap of symmetric binary perceptron
We study the potential presence of statistical-computational gaps (SCG) in symmetric binary perceptrons (SBP) via a parametric utilization of \emph{fully lifted random duality theory} (fl-RDT) [96]. A structural change from a decreasingly ordered to an arbitrarily ordered $c$-sequence (a key fl-RDT parametric component) is observed on the second lifting level and associated with a change between the \emph{satisfiability} ($\alpha_c$) and \emph{algorithmic} ($\alpha_a$) constraint density thresholds, thereby suggesting the potential existence of a nonzero computational gap $\mathrm{SCG}=\alpha_c-\alpha_a$. The second-level estimate is shown to match the theoretical $\alpha_c$, whereas the $r\rightarrow \infty$ level one is proposed to correspond to $\alpha_a$. For example, for the canonical SBP ($\kappa=1$ margin) we obtain $\alpha_c\approx 1.8159$ on the second level and $\alpha_a\approx 1.6021$ (with a converging tendency towards the $\sim 1.59$ range) on the seventh. Our propositions concur remarkably well with the recent literature: (i) in [20] the local entropy replica approach predicts $\alpha_{LE}\approx 1.58$ as the onset of clustering defragmentation (the presumed driving force behind the failure of locally improving algorithms); (ii) in the $\alpha\rightarrow 0$ regime we obtain on the third lifting level $\kappa\approx 1.2385\sqrt{\frac{\alpha_a}{-\log\left ( \alpha_a \right ) }}$, which qualitatively matches the overlap gap property (OGP) based predictions of [43] and identically matches the local entropy based predictions of [24]; (iii) the $c$-sequence ordering-change phenomenology mirrors that observed for the asymmetric binary perceptron (ABP) in [98] and the negative Hopfield model in [100]; and (iv) as in [98,100], we design a CLuP-based algorithm whose practical performance closely matches the proposed theoretical predictions.
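The small-alpha scaling relation quoted in the abstract above is straightforward to evaluate numerically. A minimal sketch: the constant 1.2385 and the form of the relation are taken directly from the abstract; the function name and sample densities are illustrative.

```python
import math

def kappa_of_alpha(alpha_a: float) -> float:
    """Third-lifting-level small-alpha scaling for the SBP margin:
    kappa ~ 1.2385 * sqrt(alpha_a / (-log(alpha_a))), valid as alpha -> 0."""
    assert 0.0 < alpha_a < 1.0, "the scaling is an alpha -> 0 asymptotic"
    return 1.2385 * math.sqrt(alpha_a / (-math.log(alpha_a)))

# The predicted margin shrinks as the constraint density alpha_a -> 0:
for a in (1e-1, 1e-2, 1e-3):
    print(f"alpha_a = {a:.0e}  ->  kappa ~ {kappa_of_alpha(a):.4f}")
```

As expected from the formula, kappa decreases monotonically toward zero with the density, only slightly slower than sqrt(alpha_a) because of the logarithmic correction.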
FP64 is All You Need: Rethinking Failure Modes in Physics-Informed Neural Networks
Xu, Chenhui, Liu, Dancheng, Nassereldine, Amir, Xiong, Jinjun
Physics-Informed Neural Networks (PINNs) often exhibit failure modes in which the PDE residual loss converges while the solution error stays large, a phenomenon traditionally blamed on local optima separated from the true solution by steep loss barriers. We challenge this understanding by demonstrating that the real culprit is insufficient arithmetic precision: with standard FP32, the L-BFGS optimizer prematurely satisfies its convergence test, freezing the network in a spurious failure phase. Simply upgrading to FP64 rescues optimization, enabling vanilla PINNs to solve PDEs without any failure modes. These results reframe PINN failure modes as precision-induced stalls rather than inescapable local minima and expose a three-stage training dynamic (unconverged, failure, success) whose boundaries shift with numerical precision. Our findings emphasize that rigorous arithmetic precision is key to dependable PDE solving with neural networks.
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
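The precision-induced stall described above can be reproduced in miniature: once the residual gets small, single-precision arithmetic can no longer represent the loss decrease between iterates, so a relative-decrease stopping rule fires prematurely. A minimal sketch, assuming a generic stopping test of the kind L-BFGS implementations use; the tolerance value is illustrative, not any library's actual default.

```python
import numpy as np

def converged(f_prev, f_new, tol):
    """Relative-decrease stopping test of the kind used by L-BFGS variants."""
    return abs(f_prev - f_new) < tol * max(abs(f_prev), 1e-30)

# A genuine loss decrease of 1 part in 1e8 between two iterations:
f_prev64, f_new64 = np.float64(1.0e-3), np.float64(1.0e-3 * (1 - 1e-8))

# FP32 (relative epsilon ~1.2e-7) rounds both iterates to the same value,
# so the decrease vanishes and the test triggers:
f_prev32, f_new32 = np.float32(f_prev64), np.float32(f_new64)

tol = 1e-9
print("FP32 stops early:", bool(converged(f_prev32, f_new32, tol)))  # True
print("FP64 keeps going:", bool(converged(f_prev64, f_new64, tol)))  # False
```

The FP64 run sees the true decrease and continues optimizing; the FP32 run reports convergence at the same point, which is exactly the "spurious failure phase" the abstract describes.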
Assessment of deep learning models integrated with weather and environmental variables for wildfire spread prediction and a case study of the 2023 Maui fires
Kim, Jiyeon, Hu, Yingjie, Elhami-Khorasani, Negar, Sun, Kai, Zhou, Ryan Zhenqi
Predicting the spread of wildfires is essential for effective fire management and risk assessment. With the rapid advancement of artificial intelligence (AI), various deep learning models have been developed and utilized for wildfire spread prediction. However, there is limited understanding of the advantages and limitations of these models, and it is also unclear how deep learning-based fire spread models compare with existing non-AI fire models. In this work, we assess the ability of five representative deep learning models, integrated with weather and environmental variables, to predict wildfire spread based on over ten years of wildfire data in the state of Hawaii. We further use the 2023 Maui fires as a case study to compare the best deep learning models with a widely used fire spread model, FARSITE. The results show that two deep learning models, i.e., ConvLSTM and ConvLSTM with attention, perform the best among the five tested AI models. FARSITE shows higher precision, lower recall, and a higher F1-score than the best AI models, while the AI models offer greater flexibility regarding input data. By integrating the AI models with an explainable AI method, we further identify the important weather and environmental factors associated with the 2023 Maui wildfires.
- North America > United States > Hawaii > Maui County > Lahaina (0.05)
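The FARSITE-vs-AI comparison above rests on pixel-wise precision, recall, and F1 over predicted burn masks. A minimal sketch of those three metrics; the toy 4x4 masks are illustrative, not data from the study.

```python
import numpy as np

def burn_mask_metrics(pred, truth):
    """Pixel-wise precision/recall/F1 for binary fire-spread masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)                      # correctly predicted burned cells
    precision = tp / max(np.sum(pred), 1)          # of predicted-burned, how many burned
    recall = tp / max(np.sum(truth), 1)            # of truly burned, how many predicted
    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
    return precision, recall, f1

# Toy example: prediction misses one burned cell and adds one false alarm
truth = np.array([[1,1,0,0],[1,1,0,0],[0,0,0,0],[0,0,0,0]])
pred  = np.array([[1,1,0,0],[1,0,0,0],[0,0,1,0],[0,0,0,0]])
p, r, f1 = burn_mask_metrics(pred, truth)
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")  # 0.75 each
```

The abstract's finding that FARSITE has higher precision but lower recall than the AI models means it under-predicts burned area but rarely raises false alarms; F1 summarizes that trade-off in a single number.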
QAL: A Loss for Recall Precision Balance in 3D Reconstruction
Meshram, Pranay, Turkar, Yash, Singh, Kartikeya, Masilamani, Praveen Raj, Adhivarahan, Charuvahan, Dantu, Karthik
Volumetric learning underpins many 3D vision tasks such as completion, reconstruction, and mesh generation, yet training objectives still rely on Chamfer Distance (CD) or Earth Mover's Distance (EMD), which fail to balance recall and precision. We propose Quality-Aware Loss (QAL), a drop-in replacement for CD/EMD that combines a coverage-weighted nearest-neighbor term with an uncovered-ground-truth attraction term, explicitly decoupling recall and precision into tunable components. Across diverse pipelines, QAL achieves consistent coverage gains, improving by an average of +4.3 pts over CD and +2.8 pts over the best alternatives. Though modest in percentage terms, these improvements reliably recover thin structures and under-represented regions that CD/EMD overlook. Extensive ablations confirm stable performance across hyperparameters and output resolutions, while full retraining on PCN and ShapeNet demonstrates generalization across datasets and backbones. Moreover, QAL-trained completions yield higher grasp scores under GraspNet evaluation, showing that improved coverage translates directly into more reliable robotic manipulation. QAL thus offers a principled, interpretable, and practical objective for robust 3D vision and safety-critical robotics pipelines.
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.69)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.46)
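The two-term structure the QAL abstract describes, a nearest-neighbor term on predictions plus an attraction term on uncovered ground-truth points, can be caricatured in a few lines. This is a sketch of the idea only: the coverage radius `r_cov`, the balance weights `lam_p`/`lam_r`, and the exact form of each term are illustrative placeholders, not the paper's actual formulation.

```python
import numpy as np

def qal_sketch(pred, gt, r_cov=0.1, lam_p=1.0, lam_r=1.0):
    """Toy QAL-style objective over two point clouds pred (N,3) and gt (M,3).

    Precision-like term: mean distance from each predicted point to its
    nearest ground-truth point. Recall-like term: mean distance from GT
    points *not covered* within r_cov by any prediction to their nearest
    predicted point, pulling predictions toward under-represented regions.
    """
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)  # (N, M)
    precision_term = d.min(axis=1).mean()   # each prediction -> nearest GT
    gt_nn = d.min(axis=0)                   # each GT point -> nearest prediction
    uncovered = gt_nn > r_cov
    recall_term = gt_nn[uncovered].mean() if uncovered.any() else 0.0
    return lam_p * precision_term + lam_r * recall_term
```

Unlike plain Chamfer distance, the second term vanishes once every ground-truth point is covered, so the recall and precision error modes can be traded off independently via `lam_p` and `lam_r`, which is the decoupling the abstract emphasizes.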